We consider the idealized setting of gradient flow on the population risk for infinitely wide two-layer ReLU neural networks (without bias), and study the effect of symmetries on the learned parameters and predictors. We first describe a general class of symmetries which, when satisfied by the target function $f^*$ and the input distribution, are preserved by the dynamics. We then study more specific cases. When $f^*$ is odd, we show that the dynamics of the predictor reduces to that of a (non-linearly parameterized) linear predictor, and its exponential convergence can be guaranteed. When $f^*$ has a low-dimensional structure, we prove that the gradient flow PDE reduces to a lower-dimensional PDE. Furthermore, we present informal and numerical arguments that suggest that the input neurons align with the lower-dimensional structure of the problem.
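As a concrete illustration of this setting, the following minimal sketch trains a wide two-layer ReLU network (without bias) with stochastic gradient steps approximating the population squared risk, using symmetric Gaussian inputs and an odd (linear) target. The architecture, target, and hyper-parameters are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 2000                       # input dimension, number of hidden neurons
lr, steps, batch = 0.2, 5000, 512    # illustrative hyper-parameters

# Assumed odd target: f*(x) = <beta, x>, with symmetric (standard Gaussian) inputs.
beta = rng.standard_normal(d); beta /= np.linalg.norm(beta)
f_star = lambda X: X @ beta

# Two-layer ReLU network without bias, mean-field scaling 1/m.
W = rng.standard_normal((m, d))      # input-layer weights
a = rng.standard_normal(m)           # output-layer weights

def predict(X):
    return np.maximum(X @ W.T, 0.0) @ a / m

for _ in range(steps):
    X = rng.standard_normal((batch, d))           # fresh sample as a proxy for the population risk
    r = predict(X) - f_star(X)                    # residuals
    H = np.maximum(X @ W.T, 0.0)                  # hidden activations
    grad_a = H.T @ r / (m * batch)
    grad_W = ((H > 0) * r[:, None] * a).T @ X / (m * batch)
    a -= lr * m * grad_a                          # the extra factor m is the usual mean-field time scaling
    W -= lr * m * grad_W

X_test = rng.standard_normal((5000, d))
print("test MSE:", np.mean((predict(X_test) - f_star(X_test)) ** 2))
```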
Neural networks trained to minimize the logistic (a.k.a. cross-entropy) loss with gradient-based methods are observed to perform well in many supervised classification tasks. Towards understanding this phenomenon, we analyze the training and generalization behavior of infinitely wide two-layer neural networks with homogeneous activations. We show that the limits of the gradient flow on exponentially tailed losses can be fully characterized as a max-margin classifier in a certain non-Hilbertian space of functions. In the presence of hidden low-dimensional structures, the resulting margin is independent of the ambient dimension, which leads to strong generalization bounds. In contrast, training only the output layer implicitly solves a kernel support vector machine, which a priori does not enjoy such adaptivity. Our analysis of training is non-quantitative in terms of running time, but we prove computational guarantees in simplified settings by showing equivalences with online mirror descent. Finally, numerical experiments suggest that our analysis describes well the practical behavior of two-layer neural networks with ReLU activations and confirm the statistical benefits of this implicit bias.
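To make the comparison above concrete, here is a minimal sketch (not the paper's code) that trains a wide two-layer ReLU classifier with the logistic loss on a toy task whose labels depend on a single direction, once with both layers trained and once with the input layer frozen (the random-features / kernel regime). All sizes and step sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, n = 30, 300, 100               # illustrative sizes: few samples in moderate dimension
lr, steps = 0.02, 10000

# Toy task with hidden one-dimensional structure: labels depend only on the first coordinate.
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])

def train(train_input_layer):
    W = rng.standard_normal((m, d))
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # unit-norm input neurons at initialization
    a = rng.standard_normal(m) / m                    # small output weights
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)
        s = -y / (1.0 + np.exp(y * (H @ a))) / n      # d(logistic loss)/d(output)
        a -= lr * H.T @ s
        if train_input_layer:
            W -= lr * ((H > 0) * s[:, None] * a).T @ X   # frozen in the output-layer-only run
    return W, a

Xt = rng.standard_normal((5000, d)); yt = np.sign(Xt[:, 0])
for both in (True, False):
    W, a = train(both)
    acc = np.mean(np.sign(np.maximum(Xt @ W.T, 0.0) @ a) == yt)
    print("both layers trained:" if both else "output layer only: ", f"test accuracy = {acc:.3f}")
```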
In a series of recent theoretical works, it was shown that strongly overparameterized neural networks trained with gradient-based methods could converge exponentially fast to zero training loss, with their parameters hardly varying. In this work, we show that this "lazy training" phenomenon is not specific to overparameterized neural networks, and is due to a choice of scaling, often implicit, that makes the model behave as its linearization around the initialization, thus yielding a model equivalent to learning with positive-definite kernels. Through a theoretical analysis, we exhibit various situations where this phenomenon arises in non-convex optimization and we provide bounds on the distance between the lazy and linearized optimization paths. Our numerical experiments bring a critical note, as we observe that the performance of commonly used non-linear deep convolutional neural networks in computer vision degrades when trained in the lazy regime. This makes it unlikely that "lazy training" is behind the many successes of neural networks in difficult high dimensional tasks.
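The scaling mechanism behind lazy training can be reproduced in a few lines. Below, a small two-layer ReLU model is rescaled by a factor alpha (with its output at initialization subtracted) and trained by gradient descent for several values of alpha; the model, data, and step sizes are illustrative assumptions, but the qualitative effect (larger alpha yields smaller relative parameter movement) is the phenomenon described above.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, n = 10, 50, 100
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0])                            # assumed toy target

w0 = rng.standard_normal((m, d)) / np.sqrt(d)  # fixed initialization, shared across runs
a0 = rng.standard_normal(m) / np.sqrt(m)

def train(alpha, lr=0.05, steps=2000):
    """Gradient descent on the squared loss of f_alpha(x) = alpha * (h(x) - h(x) at init)."""
    W, a = w0.copy(), a0.copy()
    h0 = np.maximum(X @ w0.T, 0.0) @ a0        # output at initialization, subtracted from the model
    for _ in range(steps):
        H = np.maximum(X @ W.T, 0.0)
        r = alpha * (H @ a - h0) - y
        ga = alpha * H.T @ r / n
        gW = alpha * ((H > 0) * r[:, None] * a).T @ X / n
        a -= lr / alpha**2 * ga                # 1/alpha^2 time rescaling keeps the output dynamics comparable
        W -= lr / alpha**2 * gW
    move = np.sqrt(np.linalg.norm(W - w0)**2 + np.linalg.norm(a - a0)**2)
    scale = np.sqrt(np.linalg.norm(w0)**2 + np.linalg.norm(a0)**2)
    mse = np.mean((alpha * (np.maximum(X @ W.T, 0.0) @ a - h0) - y)**2)
    return move / scale, mse

for alpha in (1.0, 10.0, 100.0):
    rel_move, mse = train(alpha)
    print(f"alpha={alpha:6.1f}   relative parameter movement={rel_move:.4f}   train MSE={mse:.4f}")
```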
This paper studies the infinite-width limit of deep linear neural networks initialized with random parameters. We show that, when the number of neurons diverges, the training dynamics converge (in a precise sense) to the dynamics obtained from gradient descent on an infinitely wide deterministic linear neural network. Moreover, even though the weights remain random, we obtain their precise law along the training dynamics, and prove a quantitative convergence result for the linear predictor in terms of the number of neurons. We finally study the continuous-time limit obtained for infinitely wide linear neural networks and show that the linear predictors of the neural network converge at an exponential rate to the minimal $\ell_2$-norm minimizer of the risk.
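A minimal finite-width sketch of the last statement (with only two layers, illustrative sizes, and an empirical risk): an underdetermined least-squares problem is fit by a wide linear network started from random weights, and the resulting linear predictor is compared to the minimal $\ell_2$-norm minimizer; the gap is expected to shrink as the width grows.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 20, 50                                  # underdetermined problem: many minimizers exist
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
beta_min_norm = np.linalg.pinv(X) @ y          # minimal l2-norm minimizer of the empirical risk

lr, steps = 0.5, 5000
for m in (500, 8000):                          # two widths, to see the effect of the number of neurons
    W1 = rng.standard_normal((m, d)) / np.sqrt(d)
    w2 = rng.standard_normal(m) / np.sqrt(m)
    for _ in range(steps):
        beta = W1.T @ w2 / np.sqrt(m)          # linear predictor realized by the network
        g_beta = X.T @ (X @ beta - y) / n      # gradient of the risk w.r.t. the predictor
        g2 = W1 @ g_beta / np.sqrt(m)          # chain rule through the factorization
        g1 = np.outer(w2, g_beta) / np.sqrt(m)
        w2 -= lr * g2
        W1 -= lr * g1
    beta_net = W1.T @ w2 / np.sqrt(m)
    gap = np.linalg.norm(beta_net - beta_min_norm) / np.linalg.norm(beta_min_norm)
    print(f"width m={m:5d}   relative distance to the min-norm solution: {gap:.3f}")
```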
Trajectory inference aims at recovering the dynamics of a population from snapshots of its temporal marginals. To solve this task, Lavenant et al. (arXiv:2102.09204) introduced a minimal-entropy estimator relative to the Wiener measure in path space and showed that it consistently recovers the dynamics of a large class of drift-diffusion processes from the solution of an infinite-dimensional convex optimization problem. In this paper, we introduce a grid-free algorithm to compute this estimator. Our method consists of a family of point clouds (one per snapshot) coupled via Schrödinger bridges, which evolve with noisy gradient descent. We study the mean-field limit of the dynamics and prove its global convergence to the desired estimator. This yields an inference method with end-to-end theoretical guarantees that solves an interpretable model for trajectory inference. We also present how to adapt the method to deal with mass variations, a useful extension when dealing with single-cell RNA-sequencing data, where cells can branch and die.
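The basic building block of the method above is the coupling of consecutive point clouds by Schrödinger bridges, i.e., entropy-regularized optimal transport. The sketch below (not the paper's algorithm; it omits the noisy-gradient-descent evolution and mass variations) computes such a coupling between two toy snapshots with a plain Sinkhorn iteration and reads off a crude drift estimate.

```python
import numpy as np

rng = np.random.default_rng(4)

def sinkhorn_coupling(A, B, eps=0.5, iters=300):
    """Entropic optimal-transport plan between two uniform point clouds: the static
    Schrodinger-bridge coupling used here to link consecutive snapshots."""
    n, m = len(A), len(B)
    C = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared-distance cost
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(m)
    for _ in range(iters):
        u = (1.0 / n) / (K @ v)
        v = (1.0 / m) / (K.T @ u)
    return u[:, None] * K * v[None, :]                   # coupling matrix with ~uniform marginals

# Two toy snapshots of a drifting population (assumed data: a Gaussian cloud moving to the right).
t0 = rng.standard_normal((200, 2))
t1 = rng.standard_normal((200, 2)) + np.array([2.0, 0.0])

P = sinkhorn_coupling(t0, t1)
print("max row-marginal error:", np.abs(P.sum(1) - 1 / 200).max())
# Barycentric displacement of each particle under the coupling: a crude drift estimate.
drift = (P @ t1) / P.sum(1, keepdims=True) - t0
print("mean estimated drift:", drift.mean(0))            # roughly (2, 0) for this toy data
```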
To understand the behavior of trained deep neural networks theoretically, it is necessary to study the dynamics induced by gradient methods from a random initialization. However, the non-linear and compositional structure of these models makes these dynamics difficult to analyze. To overcome these challenges, large-width asymptotics have recently emerged as a fruitful viewpoint and have led to practical insights on real-world deep networks. For two-layer neural networks, it has been understood through these asymptotics that the nature of the trained model varies with the scale of the initial random weights, ranging from a kernel regime (for large initial variance) to a feature-learning regime (for small initial variance). For deeper networks more regimes are possible, and in this paper we study in detail a specific choice of "small" initialization corresponding to "mean-field" limits of neural networks, which we call integrable parameterizations (IPs). First, we show that under standard i.i.d. zero-mean initialization, integrable parameterizations of neural networks with more than four layers start at a stationary point in the infinite-width limit and no learning occurs. We then propose various methods to avoid this trivial behavior and analyze the resulting dynamics in detail. In particular, one of these methods consists in using large initial learning rates, and we show that it is equivalent to a modification of the recently proposed maximal update parameterization $\mu$P. We confirm our results with numerical experiments on image classification tasks, which additionally show strong differences in behavior between various choices of activation functions that are not yet captured by theory.
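The degeneracy at initialization mentioned above can be observed numerically. The sketch below uses a 1/width prefactor on the hidden and output layers of a depth-5 ReLU network as a stand-in for an integrable parameterization (the paper's exact definition may differ, in particular in the treatment of the first and last layers); the output at initialization, and hence the first gradient step under an O(1) learning rate, shrinks rapidly as the width grows.

```python
import numpy as np

rng = np.random.default_rng(5)
d, depth, n = 10, 5, 200                 # more than four layers, as in the degenerate regime above
X = rng.standard_normal((n, d))

def output_at_init(width):
    """Forward pass of a ReLU MLP whose hidden and output layers carry a 1/width prefactor
    (a mean-field-style scaling used here as a stand-in for an integrable parameterization)."""
    h = np.maximum(X @ (rng.standard_normal((d, width)) / np.sqrt(d)), 0.0)
    for _ in range(depth - 2):
        h = np.maximum(h @ (rng.standard_normal((width, width)) / width), 0.0)
    return h @ (rng.standard_normal(width) / width)

for width in (64, 256, 1024, 4096):
    print(f"width={width:5d}   output std at initialization = {output_at_init(width).std():.2e}")
```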
The idea of slicing divergences has proven successful for comparing two probability measures in various machine learning applications, including generative modeling, and consists in computing the expected value of a "base divergence" between one-dimensional random projections of the two measures. However, the topological, statistical, and computational consequences of this technique have not yet been fully established. In this paper, we aim at bridging this gap and derive various theoretical properties of sliced probability divergences. First, we show that slicing preserves the metric axioms and the weak continuity of the divergence, which implies that sliced divergences share similar topological properties. We then make these results precise in the case where the base divergence belongs to the class of integral probability metrics. On the other hand, we establish that, under mild conditions, the sample complexity of a sliced divergence does not depend on the problem dimension. We finally apply our general results to several base divergences and illustrate our theory on synthetic and real data experiments.
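As a concrete instance, the sketch below estimates the sliced 1-Wasserstein distance, i.e., the expected one-dimensional Wasserstein distance between random projections of two samples, by Monte Carlo over directions; the data and the number of projections are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

def sliced_wasserstein(A, B, n_projections=200):
    """Monte-Carlo estimate of the sliced 1-Wasserstein distance between two equal-size samples:
    average the 1-D Wasserstein distance over random one-dimensional projections."""
    d = A.shape[1]
    total = 0.0
    for _ in range(n_projections):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)                    # random direction on the unit sphere
        a, b = np.sort(A @ theta), np.sort(B @ theta)     # projected 1-D samples
        total += np.mean(np.abs(a - b))                   # W1 between sorted equal-size 1-D samples
    return total / n_projections

# Assumed toy data: two Gaussian samples, one shifted by 1 in every coordinate.
A = rng.standard_normal((500, 10))
B = rng.standard_normal((500, 10)) + 1.0
print("sliced W1 (shifted)  :", sliced_wasserstein(A, B))
print("sliced W1 (identical):", sliced_wasserstein(A, A))
```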